78 research outputs found

    Investigation of Glacial Dynamics in the Lambert Glacier-Amery Ice Shelf System (LAS) Using Remote Sensing

    Get PDF
    Numerous recent studies have documented dynamic changes in the behavior of large marine-terminating outlet glaciers and ice streams in Greenland, the Antarctic Peninsula, and West Antarctica. However, fewer observations of outlet glaciers and ice shelves exist for the East Antarctic Ice Sheet. In addition, most recent surface velocity mappings of the Lambert Glacier-Amery Ice Shelf system (LAS) are derived for the period 1997-2000. The surface velocity measurements from this research provide a more extended view of the behavior and stability of the LAS over the past two decades than can be gleaned from a single observational period. This study uses remote sensing to investigate whether significant changes in velocity have occurred from the late 1980s through the late 2010s and assesses the magnitude of mass balance changes observed at the grounding line. To accomplish this goal, surface velocities of the LAS are measured for three separate time periods spanning the late 1980s to the late 2010s. The observed surface velocities of the LAS ranged from 0 to 1300 m yr^-1 during 1988-1990. A slight slowdown is detected at the central Amery Ice Shelf front by analyzing surface velocity measurements made along the centerlines. The mass balance, the difference between snow accumulation and the outflux of the grounded LAS, is calculated for each sub-basin during the three time intervals of 1988-1990, 1999-2004, and 2007-2011 to illustrate mass balance variation at the sub-basin level. The flux gates of the Lambert, Mellor, and Fisher Glacier sub-basins combined appear to be the largest outlet of grounded ice from the LAS. The total ice mass transported from the interior through these three flux gates is 43.58 Gt yr^-1, 36.72 Gt yr^-1, and 38.61 Gt yr^-1, respectively, for the three time intervals above. The eastern sub-basins behave differently from the western ones. 
The outfluxes of the eastern sub-basins vary from 15.85 to 18.64 Gt yr^-1, while the western outfluxes vary from 15.85 to 18.64 Gt yr^-1. Discharge from the grounded LAS declined from 84.55 Gt yr^-1 to 81.60 Gt yr^-1 and then to 79.20 Gt yr^-1 across the 1980s-1990s and 1990s-2000s intervals. Assuming the snow accumulation distribution is stable, the mass balance of the grounded LAS increased by 2.95 Gt yr^-1 from the 1980s to the 1990s and by 2.40 Gt yr^-1 from the 1990s to the 2000s. These results provide insight into the stability of the Amery Ice Shelf over the last few decades.
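The per-sub-basin bookkeeping described above (snow accumulation in, grounding-line outflux out) can be sketched as follows. The outflux figures are the combined Lambert/Mellor/Fisher flux-gate values quoted in the abstract; the accumulation value is a placeholder for illustration, not a measurement:

```python
def mass_balance(accumulation_gt_yr, outflux_gt_yr):
    """Mass balance of a grounded sub-basin (Gt/yr): accumulation minus outflux."""
    return accumulation_gt_yr - outflux_gt_yr

# Combined Lambert/Mellor/Fisher flux-gate outflux for the three epochs (Gt/yr),
# as quoted in the abstract:
outflux = {"1988-1990": 43.58, "1999-2004": 36.72, "2007-2011": 38.61}

# Hypothetical, stable accumulation (Gt/yr) -- illustrative only:
accumulation = 40.0
balance = {epoch: mass_balance(accumulation, q) for epoch, q in outflux.items()}
```

A negative balance for an epoch indicates net mass loss through that gate under the assumed accumulation.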

    Parallel implementation of 3D global MHD simulations for Earth’s magnetosphere

    Get PDF
    This paper presents a dynamic domain decomposition (D3) technique for implementing the parallelization of the piecewise parabolic method (PPM) for solving the ideal magnetohydrodynamics (MHD) equations. The key point of D3 is distributing the work dynamically among processes during execution of the PPM algorithm. This parallel code uses D3 with the message passing interface (MPI) to permit efficient implementation on clusters of distributed-memory machines, and it may also simultaneously exploit threading on shared-address-space multiprocessor architectures. 3D global MHD simulation results for the Earth’s magnetosphere on the massively parallel supercomputers Deepcomp 1800 and 6800 demonstrate the scalability and efficiency of our parallelization strategy.
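The paper's contribution is distributing PPM work dynamically during execution. As a baseline point of comparison, the static 1-D block decomposition that such schemes typically start from can be sketched in a few lines; the function name and slab layout here are illustrative assumptions, not the paper's code:

```python
def block_bounds(nz, nproc, rank):
    """Return the [lo, hi) z-slab owned by `rank` in a static 1-D block
    decomposition of a grid with nz cells; remainder cells go to low ranks."""
    base, rem = divmod(nz, nproc)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

# e.g. 3 ranks over 10 z-planes -> slabs of 4, 3, and 3 cells
slabs = [block_bounds(10, 3, r) for r in range(3)]
```

A dynamic scheme like D3 would then rebalance these slabs at runtime as per-cell cost varies, rather than fixing them once at startup.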

    Resource Allocation for Capacity Optimization in Joint Source-Channel Coding Systems

    Full text link
    Benefiting from advances in deep learning (DL) techniques, deep joint source-channel coding (JSCC) has shown great potential to improve the performance of wireless transmission. However, most existing works focus on the DL-based transceiver design of the JSCC model while ignoring the resource allocation problem in wireless systems. In this paper, we consider a downlink resource allocation problem in which a base station (BS) jointly optimizes the compression ratio (CR), power allocation, and resource block (RB) assignment of each user according to latency and performance constraints so as to maximize the number of users that successfully receive their requested content with the desired quality. To solve this problem, we first decompose it into two subproblems without loss of optimality. The first subproblem minimizes the required transmission power for each user under a given RB allocation; we derive a closed-form expression for the optimal transmit power by searching for the maximum feasible compression ratio. The second subproblem maximizes the number of supported users through optimal user-RB pairing, which we solve using bisection search together with Karmarkar's algorithm. Simulation results validate the effectiveness of the proposed resource allocation method in terms of the number of satisfied users with given resources.
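The bisection component of this approach can be illustrated generically. Assuming feasibility is monotone in the compression ratio (a simplification for this sketch; the paper's actual constraints involve latency and performance), bisection recovers the largest feasible value:

```python
def max_feasible(feasible, lo=0.0, hi=1.0, tol=1e-6):
    """Largest x in [lo, hi] with feasible(x) True, assuming feasibility
    is monotone: once infeasible, it stays infeasible for larger x."""
    if not feasible(lo):
        return None  # nothing in the interval is feasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid  # keep searching upward
        else:
            hi = mid  # shrink toward the feasible side
    return lo

# Toy constraint: the CR is feasible while it stays at or below 0.62.
cr = max_feasible(lambda x: x <= 0.62)
```

The same predicate-based skeleton applies whether the unknown is a compression ratio or a power level, as long as monotonicity holds.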

    MIMO Precoding Design with QoS and Per-Antenna Power Constraints

    Full text link
    Precoding design for the downlink of multiuser multiple-input multiple-output (MU-MIMO) systems is a fundamental problem. In this paper, we aim to maximize the weighted sum rate (WSR) while satisfying both the quality-of-service (QoS) constraints of each user and per-antenna power constraints (PAPCs) in the downlink MU-MIMO system. To solve the problem, we reformulate it into an equivalent problem using the well-known weighted minimum mean-square error (WMMSE) framework, which can be tackled by iteratively solving three subproblems. Since the precoding matrices are coupled across the QoS constraints and PAPCs, we adopt the alternating direction method of multipliers (ADMM) to obtain a distributed solution. Simulation results validate the effectiveness of the proposed algorithm.
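A per-antenna power constraint limits the power of each row of the precoding matrix separately. A minimal feasibility step, shown here only as an illustration and not as the paper's ADMM update, scales any row that exceeds its budget back onto the constraint boundary:

```python
import numpy as np

def project_papc(W, p_max):
    """Enforce per-antenna power constraints on a real precoding matrix W
    (rows = antennas, columns = users): scale any row whose transmit power
    ||W[m, :]||^2 exceeds its budget p_max[m] down to that budget."""
    W = np.array(W, dtype=float)
    for m in range(W.shape[0]):
        power = np.dot(W[m], W[m])
        if power > p_max[m]:
            W[m] = W[m] * np.sqrt(p_max[m] / power)
    return W

# Antenna 0 exceeds its unit budget (power 25) and is scaled down;
# antenna 1 (power 0.05) is already feasible and is left unchanged.
W = project_papc([[3.0, 4.0], [0.1, 0.2]], p_max=[1.0, 1.0])
```

In an ADMM decomposition, each antenna's constraint can be handled locally in exactly this per-row fashion, which is what makes a distributed solution natural.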

    Theoretical Evaluation of Anisotropic Reflectance Correction Approaches for Addressing Multi-Scale Topographic Effects on the Radiation-Transfer Cascade in Mountain Environments

    Get PDF
    Research involving anisotropic-reflectance correction (ARC) of multispectral imagery to account for topographic effects has been ongoing for approximately 40 years. A large body of research has focused on evaluating empirical ARC methods, with inconsistent results. Consequently, our research objective was to evaluate commonly used ARC methods using first-order radiation-transfer modeling to simulate ASTER multispectral imagery over Nanga Parbat, Himalaya. Specifically, we accounted for orbital dynamics, atmospheric absorption and scattering, direct- and diffuse-skylight irradiance, land cover structure, and surface biophysical variations to evaluate the methods' effectiveness in reducing multi-scale topographic effects. Our results clearly reveal that the empirical methods we evaluated could not reasonably account for multi-scale topographic effects at Nanga Parbat. The magnitude of reflectance and the correlation structure of biophysical properties were not preserved in the topographically-corrected multispectral imagery. The CCOR and SCS+C methods were able to remove topographic effects, given the Lambertian assumption, although atmospheric correction was required, and we did not account for other primary and secondary topographic effects that are thought to significantly influence spectral variation in imagery acquired over mountains. Evaluation of structural-similarity index images revealed spatially variable results that are wavelength dependent. Collectively, our simulation and evaluation procedures strongly suggest that empirical ARC methods have significant limitations for addressing anisotropic reflectance caused by multi-scale topographic effects. Results indicate that atmospheric correction is essential, and most methods failed to adequately produce the appropriate magnitude and spatial variation of surface reflectance in corrected imagery. 
Results were also wavelength dependent, as topographic effects influence radiation-transfer components differently in different regions of the electromagnetic spectrum. Our results explain inconsistencies described in the literature and indicate that numerical modeling efforts are required to better account for multi-scale topographic effects in various radiation-transfer components.
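For reference, the SCS+C method evaluated above is conventionally written as a per-pixel radiance adjustment that blends sun-canopy-sensor geometry with an empirical band constant C. This sketch assumes the standard published form of SCS+C; the input values are illustrative:

```python
import math

def scs_c(L, cos_i, slope_deg, solar_zenith_deg, C):
    """SCS+C topographic correction of radiance L for one pixel.
    cos_i: cosine of the local solar incidence angle on the slope;
    C: band-specific empirical constant from regressing L against cos_i."""
    cos_slope = math.cos(math.radians(slope_deg))
    cos_sz = math.cos(math.radians(solar_zenith_deg))
    return L * (cos_slope * cos_sz + C) / (cos_i + C)

# On flat terrain with the sun at zenith (cos_i = 1), the correction is the identity.
L_flat = scs_c(100.0, 1.0, 0.0, 0.0, C=0.5)

# A poorly illuminated slope pixel (small cos_i) is brightened.
L_shaded = scs_c(100.0, 0.3, 30.0, 40.0, C=0.5)
```

The C term moderates over-correction in deep shadow, which is precisely where purely cosine-based corrections are known to fail.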

    An integrated transcriptomic and computational analysis for biomarker identification in gastric cancer

    Get PDF
    This report describes an integrated study on the identification of potential markers for gastric cancer in patients’ cancer tissues and sera, based on: (i) genome-scale transcriptomic analyses of 80 paired gastric cancer/reference tissues and (ii) computational prediction of blood-secretory proteins supported by experimental validation. Our findings show that: (i) 715 and 150 genes exhibit significantly differential expression in all cancers and early-stage cancers versus reference tissues, respectively, and a substantial percentage of the alteration is found to be influenced by age and/or gender; (ii) 21 co-expressed gene clusters have been identified, some of which are specific to certain subtypes or stages of the cancer; (iii) the top-ranked gene signatures give better than 94% classification accuracy between cancer and reference tissues, some of which are gender-specific; and (iv) 136 of the differentially expressed genes were predicted to have their proteins secreted into blood, 81 of which were detected experimentally in the sera of 13 validation samples and 29 of which were found to have differential abundances in the sera of cancer patients versus controls. Overall, the novel information obtained in this study has led to the identification of promising diagnostic markers for gastric cancer and can benefit further analyses of the key (early) abnormalities during its development.

    Quantitative Deep Sequencing Reveals Dynamic HIV-1 Escape and Large Population Shifts during CCR5 Antagonist Therapy In Vivo

    Get PDF
    High-throughput sequencing platforms provide an approach for detecting rare HIV-1 variants and more fully documenting quasispecies diversity. We applied this technology to the V3 loop-coding region of env in samples collected from 4 chronically HIV-infected subjects in whom CCR5 antagonist (vicriviroc [VVC]) therapy had failed. Between 25,000 and 140,000 amplified sequences were obtained per sample. Profound baseline V3 loop sequence heterogeneity existed; predicted CXCR4-using populations were identified within a largely CCR5-using population. The V3 loop forms associated with subsequent virologic failure, either through CXCR4 use or the emergence of high-level VVC resistance, were present as minor variants at frequencies of 0.8-2.8% in baseline samples. Extreme, rapid shifts in population frequencies toward these forms occurred, and deep sequencing provided a detailed view of the rapid evolutionary impact of VVC selection. Greater V3 diversity was observed post-selection. This previously unreported degree of V3 loop sequence diversity has implications for viral pathogenesis, vaccine design, and the optimal use of HIV-1 CCR5 antagonists.

    The Changing Landscape for Stroke Prevention in AF: Findings From the GLORIA-AF Registry Phase 2

    Get PDF
    Background GLORIA-AF (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients with Atrial Fibrillation) is a prospective, global registry program describing antithrombotic treatment patterns in patients with newly diagnosed nonvalvular atrial fibrillation at risk of stroke. Phase 2 began when dabigatran, the first non-vitamin K antagonist oral anticoagulant (NOAC), became available. Objectives This study sought to describe phase 2 baseline data and compare these with the pre-NOAC era collected during phase 1. Methods During phase 2, 15,641 consenting patients were enrolled (November 2011 to December 2014); 15,092 were eligible. This pre-specified cross-sectional analysis describes eligible patients’ baseline characteristics. Atrial fibrillation disease characteristics, medical outcomes, and concomitant diseases and medications were collected. Data were analyzed using descriptive statistics. Results Of the total patients, 45.5% were female; median age was 71 (interquartile range: 64, 78) years. Patients were from Europe (47.1%), North America (22.5%), Asia (20.3%), Latin America (6.0%), and the Middle East/Africa (4.0%). Most had high stroke risk (CHA2DS2-VASc [Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, previous Stroke, Vascular disease, Age 65 to 74 years, Sex category] score ≥2; 86.1%); 13.9% had moderate risk (CHA2DS2-VASc = 1). Overall, 79.9% received oral anticoagulants, of whom 47.6% received NOAC and 32.3% vitamin K antagonists (VKA); 12.1% received antiplatelet agents; 7.8% received no antithrombotic treatment. For comparison, the proportion of phase 1 patients (of N = 1,063 all eligible) prescribed VKA was 32.8%, acetylsalicylic acid 41.7%, and no therapy 20.2%. In Europe in phase 2, treatment with NOAC was more common than VKA (52.3% and 37.8%, respectively); 6.0% of patients received antiplatelet treatment; and 3.8% received no antithrombotic treatment. 
In North America, 52.1%, 26.2%, and 14.0% of patients received NOAC, VKA, and antiplatelet drugs, respectively; 7.5% received no antithrombotic treatment. NOAC use was less common in Asia (27.7%), where 27.5% of patients received VKA, 25.0% antiplatelet drugs, and 19.8% no antithrombotic treatment. Conclusions The baseline data from GLORIA-AF phase 2 demonstrate that in newly diagnosed nonvalvular atrial fibrillation patients, NOAC have been highly adopted into practice, becoming more frequently prescribed than VKA in Europe and North America. Worldwide, however, a large proportion of patients remain undertreated, particularly in Asia and North America. (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients With Atrial Fibrillation [GLORIA-AF]; NCT01468701.)

    Finishing the euchromatic sequence of the human genome

    Get PDF
    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth, and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.